Nearly Linear-Time, Parallelizable Algorithms for Non-Monotone Submodular Maximization
We study parallelizable algorithms for maximization of a submodular function,
not necessarily monotone, with respect to a cardinality constraint. We
improve the best approximation factor achieved by an algorithm that has optimal
adaptivity and query complexity, up to logarithmic factors in the size of
the ground set, from to . We provide two algorithms; the first has
approximation ratio , adaptivity , and query complexity , while the
second has approximation ratio , adaptivity , and query complexity .
Heuristic versions of our algorithms are empirically validated to use a low
number of adaptive rounds and total queries while obtaining solutions with
high objective value in comparison with highly adaptive approximation
algorithms.
Comment: 24 pages, 2 figures
Unveiling the Limits of Learned Local Search Heuristics: Are You the Mightiest of the Meek?
In recent years, combining neural networks with local search heuristics has
become popular in the field of combinatorial optimization. Despite its
considerable computational demands, this approach has exhibited promising
outcomes with minimal manual engineering. However, we have identified three
critical limitations in the empirical evaluation of these integration attempts.
Firstly, instances with moderate complexity and weak baselines pose a challenge
in accurately evaluating the effectiveness of learning-based approaches.
Secondly, the absence of an ablation study makes it difficult to quantify and
attribute improvements accurately to the deep learning architecture. Lastly,
the generalization of learned heuristics across diverse distributions remains
underexplored. In this study, we conduct a comprehensive investigation into
these identified limitations. Surprisingly, we demonstrate that a simple
learned heuristic based on Tabu Search surpasses state-of-the-art (SOTA)
learned heuristics in terms of performance and generalizability. Our findings
challenge prevailing assumptions and open up exciting avenues for future
research and innovation in combinatorial optimization.
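To make the baseline concrete, the sketch below is a minimal plain Tabu Search on a toy binary objective. The objective, tenure, and iteration budget are hypothetical illustrations, not taken from the paper; the point is only the classical mechanism (steepest-ascent moves plus a short-term memory forbidding move reversals) that the learned heuristics are compared against.

```python
# Minimal Tabu Search on a toy binary objective (hypothetical setup).
# A tabu list forbids recently used moves for `tenure` iterations, with
# the standard aspiration criterion: a tabu move is allowed if it beats
# the best solution found so far.

def tabu_search(objective, n_bits, iters=100, tenure=5):
    x = [0] * n_bits
    best, best_val = x[:], objective(x)
    tabu = {}  # bit index -> last iteration at which flipping it is forbidden
    for t in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1  # neighborhood: single-bit flips
            v = objective(y)
            if tabu.get(i, -1) < t or v > best_val:  # non-tabu, or aspiration
                candidates.append((v, i, y))
        if not candidates:
            continue
        v, i, x = max(candidates)  # steepest (best-improvement) move
        tabu[i] = t + tenure       # forbid undoing this flip for a while
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

# Toy objective: number of ones minus a penalty for equal adjacent bits
# (maximized by alternating patterns such as 101010).
obj = lambda x: sum(x) - sum(x[i] == x[i + 1] for i in range(len(x) - 1))
sol, val = tabu_search(obj, n_bits=6)
```

Note that, unlike pure hill climbing, the tabu memory forces the search to keep moving after reaching a local optimum, which is what gives the method its escape behavior.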
Scalable Distributed Algorithms for Size-Constrained Submodular Maximization in the MapReduce and Adaptive Complexity Models
Distributed maximization of a submodular function in the MapReduce model has
received much attention, culminating in two frameworks that allow a centralized
algorithm to be run in the MR setting without loss of approximation, as long as
the centralized algorithm satisfies a certain consistency property, which had
only been shown to be satisfied by the standard greedy and continuous greedy
algorithms. A separate line of work has studied parallelizability of submodular
maximization in the adaptive complexity model, where each thread may have
access to the entire ground set. For the size-constrained maximization of a
monotone and submodular function, we show that several sublinearly adaptive
algorithms satisfy the consistency property required to work in the MR setting,
which yields highly practical parallelizable and distributed algorithms. Also,
we develop the first linear-time distributed algorithm for this problem with
constant MR rounds. Finally, we provide a method to increase the maximum
cardinality constraint for MR algorithms at the cost of additional MR rounds.
Comment: 45 pages, 6 figures
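To illustrate the MR setting the abstract refers to, the sketch below shows the standard two-round partition-and-merge pattern for distributed size-constrained monotone submodular maximization: each machine runs a centralized algorithm on its shard, and one machine then runs it again on the union of the local solutions. This is the generic pattern only; the paper's contribution concerns which centralized algorithms can be plugged in without losing the approximation guarantee. The coverage objective here is a hypothetical example.

```python
# Two-round MapReduce pattern for size-constrained monotone submodular
# maximization (generic sketch; not the paper's consistency-based framework).

def greedy(f, ground, k):
    """Centralized greedy: repeatedly add the element of largest marginal gain."""
    S = []
    for _ in range(k):
        gains = [(f(S + [e]) - f(S), e) for e in ground if e not in S]
        if not gains:
            break
        gain, e = max(gains)
        if gain <= 0:
            break
        S.append(e)
    return S

def two_round_mr(f, ground, k, num_machines):
    # "Map" round: each machine solves its partition of the ground set.
    parts = [ground[i::num_machines] for i in range(num_machines)]
    local = [greedy(f, part, k) for part in parts]
    # "Reduce" round: a single machine merges the local solutions.
    merged = [e for sol in local for e in sol]
    return greedy(f, merged, k)

# Hypothetical monotone submodular objective: coverage of a small universe.
cover = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}, 4: {5, 6}}
f = lambda S: len(set().union(*[cover[e] for e in S]))
selected = two_round_mr(f, list(range(5)), k=2, num_machines=2)
```

Each machine only ever touches its own shard plus the merged candidates, which is what distinguishes the MR model from the adaptive complexity model, where every thread may query the entire ground set.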
Pseudo-Separation for Assessment of Structural Vulnerability of a Network
Based upon the idea that network functionality is impaired if two nodes in a
network are sufficiently separated in terms of a given metric, we introduce two
combinatorial \emph{pseudocut} problems generalizing the classical min-cut and
multi-cut problems. We expect the pseudocut problems will find broad relevance
to the study of network reliability. We comprehensively analyze the
computational complexity of the pseudocut problems and provide three
approximation algorithms for these problems.
Motivated by applications in communication networks with strict
Quality-of-Service (QoS) requirements, we demonstrate the utility of the
pseudocut problems by proposing a targeted vulnerability assessment for the
structure of communication networks using QoS metrics; we perform experimental
evaluations of our proposed approximation algorithms in this context.
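For context, the sketch below computes the classical s-t min cut (via Edmonds-Karp max flow) that the pseudocut problems generalize: a min cut must fully disconnect s from t, whereas a pseudocut only needs to make them "sufficiently separated" under a given metric. The graph and capacities are hypothetical.

```python
# Classical s-t min-cut value via Edmonds-Karp max flow (baseline that the
# pseudocut problems generalize; example graph is hypothetical).
from collections import deque

def min_cut_value(cap, s, t):
    """cap: dict-of-dicts of edge capacities. Returns max-flow == min-cut value."""
    # Residual capacities, with reverse edges initialized to 0.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow equals min-cut value
        # Recover the path, find its bottleneck, and push flow along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        flow += push

capacities = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
```

By max-flow/min-cut duality the returned flow value equals the capacity of the cheapest edge set whose removal disconnects s from t; a pseudocut relaxes "disconnects" to a distance or QoS threshold, which is what makes the generalized problems relevant to network vulnerability.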
Experimental studies of heavy-mineral transportation, segregation, and deposition in gravel-bed streams
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 1986. Microfiche copy available in Archives and Science. Vita. Includes bibliographies. By Roger Alan Kuhnle. Ph.D.
Prefix-Free Parsing for Building Big BWTs
High-throughput sequencing technologies have led to explosive growth of genomic databases, one of which will soon reach hundreds of terabytes. For many applications we want to build and store indexes of these databases, but constructing such indexes is a challenge. Fortunately, many of these genomic databases are highly repetitive, a characteristic that can be exploited to enable efficient computation of the Burrows-Wheeler Transform (BWT), which underlies many popular indexes. In this paper, we introduce a preprocessing algorithm, referred to as prefix-free parsing, that takes a text T as input and in one pass generates a dictionary D and a parse P of T with the property that the BWT of T can be constructed from D and P using workspace proportional to their total size and O(|T|) time. Our experiments show that D and P are significantly smaller than T in practice, and thus can fit in a reasonable internal memory even when T is very large. Therefore, prefix-free parsing eases BWT construction, which is pertinent to many bioinformatics applications.
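For readers unfamiliar with the transform itself, the sketch below computes the BWT naively by sorting all rotations of the input. This is exactly the materialization that becomes infeasible for terabyte-scale T, and that prefix-free parsing sidesteps by working from the much smaller dictionary D and parse P instead.

```python
# Naive Burrows-Wheeler Transform by sorting all rotations.
# Fine for small inputs; prefix-free parsing exists precisely because this
# direct approach does not scale to huge, repetitive texts.

def bwt(text, sentinel="$"):
    """Return the BWT of text (the sentinel must sort before all characters)."""
    t = text + sentinel
    rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
    return "".join(rot[-1] for rot in rotations)

# Repetitive input produces runs of equal characters in the output column,
# which is what makes the BWT compressible and index-friendly.
transformed = bwt("banana")  # -> "annb$aa"
```

The inverse transform and the FM-index both build on this last column, which is why so many genomic indexes start from the BWT.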